S3Drive
Community / support / Multiple S3 accounts with different encryption levels
Tom
I'm testing out S3Drive on Windows, and was wondering if it was possible to have different mounts for different S3 endpoints with different encryption levels. My thought was:
  • unencryptedbucket1: used for things like notes, recipes, etc. that I'd rather have stored as plain-text than encrypted
  • encryptedbucket2: used for things like health records and financial documents that I'd rather have stored encrypted than plain-text
I was going to mount both of these at once, so have the Z: and Y: drives going in parallel. Is this setup possible? Or is it just one drive mounted at a time? It seems like encryption settings are also global and not per-bucket, if I understand it correctly?
povey
Multiple mounts are not yet supported, but a couple of users have already asked about it. We've done some quick research and confirmed it requires some work, but it's not a huge deal. I've added a ticket to officially track it: https://s3drive.canny.io/feature-requests/p/support-multiple-disk-mounts It probably won't come sooner than the end of this year, though. The "Start mount" feature mounts the currently selected bucket/credentials. Encryption settings are separate for each bucket. In summary: "System", "View" and "Misc" are global; "Privacy" and "Bucket" are per credentials. We're in the process of separating these settings out, so users clearly understand which of them are global and which aren't.
👍 1
Tom
Thanks for the info and tracking that request formally! Do you have a recommended workflow currently for multiple buckets if mounting doesn't work?
povey
Yep, you can mount multiple buckets outside of S3Drive. In order to do so, you'll need Rclone v1.65+ installed, and then toggle off "Use new mount experience" in the S3Drive settings. This makes S3Drive "talk" to the Rclone binary via CLI, so you can look up the exact commands. You mount bucket A, then go to the logs to extract the exact command that can be executed locally. You can then stop it, switch to bucket B and do exactly the same... so now you've got two mount commands for separate buckets. Feel free to make some adjustments (e.g. the final mount path / name change). Slightly cumbersome, but at least that way you can run more than one mount yourself, and they stay S3Drive (including encryption) compatible. (edited)
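As a rough illustration of that workflow (a sketch only: the remote names, bucket names, drive letters and flags below are placeholders, not the commands S3Drive actually writes to its logs), a small Python wrapper could keep two Rclone mounts running side by side:

    # Sketch: run two Rclone mounts in parallel on Windows.
    # The remotes, buckets and drive letters are placeholders -- substitute
    # the exact commands extracted from the S3Drive logs.
    import subprocess

    MOUNTS = [
        ("plain-remote:unencryptedbucket1", "Z:"),
        ("crypt-remote:encryptedbucket2", "Y:"),
    ]

    procs = [
        subprocess.Popen(["rclone", "mount", remote, drive, "--vfs-cache-mode", "writes"])
        for remote, drive in MOUNTS
    ]

    # Keep both mounts alive until interrupted, then shut them down.
    try:
        for p in procs:
            p.wait()
    except KeyboardInterrupt:
        for p in procs:
            p.terminate()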
Tom
sorry, what I meant was that it seems there are two ways to interact with the files:
  • using the GUI (drag & drop, clicking the + sign)
  • mounting the folder and having S3Drive automatically pick up the changes
I wasn't sure if there was a folder inside AppData or something I could create a link to that S3Drive also monitors for files without actually mounting it as a virtual drive.
1:11 PM
my use-case would be to use it like a Dropbox type of file backup/syncing application, so I'd like some folder watching to be happening. But perhaps that's not the best way to use S3Drive 🙂
povey
I am not sure if I understood correctly, but the third way of interacting with files would be to set up "Sync" from Local to Remote. In that case you would have a local folder (S3Drive on desktop supports file watching) where, after a change, a sync job is triggered and files are copied, moved or synced (depending on the selected mode) to the remote. I believe it would work somewhat closely to Dropbox sync, with a small caveat: Dropbox can do block-level sync, whereas with S3 and most cloud providers in general we can't, so even if 1 byte is changed within a file, we need to resync the whole file (we're exploring some possibilities of a different FS structure on the cloud side to support that, though). If you find any issue/bug with file watching in S3Drive (e.g. your change hasn't been picked up), I would appreciate it if you could let us know. Thanks! (edited)
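As a very rough sketch of that idea (not how S3Drive implements it: the folder path, remote name and polling interval are placeholder assumptions, and the transfer step is delegated to a plain rclone copy), watching a local folder and resyncing changed files whole could look like this:

    # Sketch: poll a local folder and re-upload changed files in full.
    # LOCAL_DIR and REMOTE are placeholders, not S3Drive settings.
    import os
    import subprocess
    import time

    LOCAL_DIR = r"C:\Users\me\SyncedFolder"
    REMOTE = "my-remote:encryptedbucket2"

    def snapshot(root):
        # Map each file to its (size, mtime) so changes can be detected.
        state = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                state[path] = (st.st_size, st.st_mtime)
        return state

    previous = snapshot(LOCAL_DIR)
    while True:
        time.sleep(5)  # crude polling instead of real file watching
        current = snapshot(LOCAL_DIR)
        if current != previous:
            # No block-level sync over S3: any changed file is transferred whole.
            # "rclone copy" skips files whose size and modtime are unchanged.
            subprocess.run(["rclone", "copy", LOCAL_DIR, REMOTE])
            previous = current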
Tom
ah it turns out I'm a total blockhead and missed this! The settings are not fully intuitive to me 😅 I think I got it now though! Something I'd like to see is the file window refresh automatically when a sync happens. I set up the sync, added a file, waited for the sync to happen, and it looks like it uploaded correctly (one bucket encrypted, one not) but I had to manually click the Refresh button in the S3 window to view the file list. Seems like something that could be done automatically when a sync occurs to keep the view state up to date 🙂
povey
file window refresh automatically when a sync happens. [...] but I had to manually click the Refresh button in the S3 window to view the file list.
This worked before we switched to Rclone sync, which is more reliable than our previous sync engine, but there isn't a direct connection between S3Drive <> Rclone any more. This feature will require us to dig into Rclone internals and create a bridge so we can update the S3Drive UI. Similar work is required to keep the Mount changes in sync with the UI. We will get to that eventually: https://s3drive.canny.io/feature-requests/p/drive-mount-webdav-sync-changes-online-with-s3drive
👍 1
Tom
I appreciate you listening to feedback, thanks! I'm actually using the S3 backend and not Rclone right now, although perhaps I should change that to use Rclone 🙃
povey
It's best to stay with S3 for S3 back-ends. The app will internally use Rclone if it needs to.
👍 1
If I start a synchronization task, is the E2E encrypted file synchronized to another storage provider's bucket still encrypted with the same password?
mix9311
The E2EE password is separate for each bucket. Once it's set, it's used for that specific bucket only. You can set up a sync between two different buckets, each encrypted using a different password. I hope that helps to clarify the scope of the E2E settings. (edited)
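As an illustration only, assuming two Rclone crypt remotes have already been configured (one per bucket, each with its own password; "crypt-a" and "crypt-b" are placeholder names), a one-way sync between them re-encrypts the data with the destination bucket's password:

    # Sketch: sync between two buckets that use different E2EE passwords.
    # "crypt-a" and "crypt-b" are placeholder crypt remotes from rclone.conf,
    # each wrapping a different bucket and configured with its own password.
    import subprocess

    # Files read from crypt-a are decrypted locally and re-encrypted with
    # crypt-b's password as they are written to the destination bucket.
    # Add "--dry-run" to the argument list first to preview the transfer.
    subprocess.run(["rclone", "sync", "crypt-a:", "crypt-b:"], check=True)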
I see
Exported 14 message(s)
Timezone: UTC+0